As a sample application of the DFT, consider polynomial multiplication.
Given two polynomials \( f(x) = a_nx^n + a_{n-1}x^{n-1} + \cdots + a_1 x + a_0\) and \(g(x) = b_mx^m + b_{m-1}x^{m-1} + \cdots + b_1 x + b_0,\) call their coefficient vectors \( (a_0,a_1,\ldots,a_n)\) and \( (b_0,b_1,\ldots,b_m)\) \(\bf a\) and \( \bf b\), respectively. The product \( f(x)g(x)\) has degree \(n+m,\) with coefficients given by the convolution vector \( {\bf a} * {\bf b},\) where
\[({\bf a} * {\bf b})_k = \sum_{c=0}^k a_c b_{k-c}.\]
It is convenient to extend the vectors \( \bf a\) and \( \bf b\) to a common space \( {\mathbb C}^N\) by padding them with extra \(0\)s: take a value of \(N\) larger than \(m+n,\) and let \( a_j = 0\) for \( j > n\) and \( b_j = 0\) for \( j > m.\) (For FFT applications it is often best to let \(N\) be a power of 2.) Then the beautiful fact about convolution and the DFT is
The DFT of \( {\bf a} * {\bf b} \) is the componentwise product of the DFT of \( \bf a \) and the DFT of \( \bf b\).
The proof of this fact is straightforward and can be found in most standard references.
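For concreteness, here is a brief sketch, under the convention \( \hat{a}_k = \sum_{j=0}^{N-1} a_j \omega^{jk}\) with \( \omega = e^{-2\pi i/N}\) (the convention used in the examples above; other sign conventions work the same way). Multiplying out the two sums and collecting terms with the same power of \( \omega\) gives
\[\hat{a}_k \hat{b}_k = \left(\sum_{j} a_j \omega^{jk}\right)\left(\sum_{l} b_l \omega^{lk}\right) = \sum_{r=0}^{N-1} \Bigg(\sum_{j+l \,\equiv\, r \pmod{N}} a_j b_l\Bigg) \omega^{rk} = \widehat{({\bf a}*{\bf b})}_k,\]
where the middle equality uses \( \omega^N = 1,\) and the last one uses the fact that the padding \( (N > m+n)\) leaves no terms to wrap around modulo \(N,\) so the cyclic sum agrees with the convolution defined above.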
So multiplying \(f(x)\) and \(g(x)\) can be accomplished by padding the coefficient vectors, computing their DFTs, multiplying the DFTs, and applying the inverse DFT to the result.
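As a concrete illustration of these four steps, here is a minimal sketch in Python; the function name poly_multiply and the use of numpy.fft are not part of the discussion above, just one convenient way to carry it out.

```python
import numpy as np

def poly_multiply(a, b):
    """Multiply two polynomials given as coefficient lists (constant term first):
    pad, take DFTs, multiply componentwise, and apply the inverse DFT."""
    n_out = len(a) + len(b) - 1            # the product has n + m + 1 coefficients
    N = 1 << (n_out - 1).bit_length()      # pad to a power of 2 at least n_out
    fa = np.fft.fft(a, N)                  # DFT of the zero-padded a
    fb = np.fft.fft(b, N)                  # DFT of the zero-padded b
    c = np.fft.ifft(fa * fb).real          # inverse DFT of the componentwise product
    return np.rint(c[:n_out]).astype(int)  # round away floating-point error
                                           # (assumes integer coefficients)

# The example worked out below: (1 + x)(1 + x + x^2)
print(poly_multiply([1, 1], [1, 1, 1]))    # [1 2 2 1], i.e. 1 + 2x + 2x^2 + x^3
```

The final rounding step is only appropriate for integer coefficients; for general real or complex coefficients one would simply return the (real or complex) values from the inverse DFT.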
Find \( (1+x)\big(1+x+x^2\big)\) using the DFT.
Pad the coefficient vectors to \( (1,1,0,0) \) and \( (1,1,1,0).\) The DFT of \((1,1,0,0)\) is \( (2,1-i, 0, 1+i) \) \(\big(\)this follows from the two examples above, since the DFT is additive and we know the DFTs of \( (1,0,0,0) \) and \( (0,1,0,0)\big),\) and the DFT of \( (1,1,1,0) \) is \( (3,-i,1,i).\) The componentwise product of these two is \( (6,-1-i,0,-1+i),\) and the inverse DFT of this vector is \( (1,2,2,1).\) So the product is \( 1+2x+2x^2+x^3.\) \(_\square\)
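These intermediate values can be checked numerically; numpy's FFT happens to use the same sign convention as the DFTs computed above, so the numbers match directly.

```python
import numpy as np

fa = np.fft.fft([1, 1, 0, 0])     # [2, 1-i, 0, 1+i]
fb = np.fft.fft([1, 1, 1, 0])     # [3, -i, 1, i]
print(np.fft.ifft(fa * fb).real)  # [1. 2. 2. 1.], i.e. 1 + 2x + 2x^2 + x^3
```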
This may seem like a roundabout way to accomplish a simple polynomial multiplication, but in fact it is quite efficient due to the existence of a fast Fourier transform (FFT). The point is that ordinary polynomial multiplication requires \( O(N^2)\) multiplications of coefficients, while the componentwise multiplication in this algorithm requires only \( O(N)\) multiplications. The FFT and inverse FFT are \( O(N \log N),\) so the polynomial multiplication algorithm which uses the FFT is \( O(N \log N)\) as well.
Multiplying large integers is done in the same way: an integer like \( 1408\) can be viewed as the evaluation of the polynomial \( 8 + 4x^2+x^3\) at \(x=10,\) so multiplying integers can be reduced to multiplying polynomials (and then evaluating at 10, or whatever base is most convenient).
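A sketch of this reduction, reusing the poly_multiply function from the earlier sketch (the name int_multiply is just illustrative, and a serious implementation would also have to watch floating-point precision once the numbers get very large):

```python
def int_multiply(x, y):
    """Multiply two nonnegative integers by multiplying their base-10 digit
    polynomials (via the DFT) and then evaluating the result at 10."""
    digits = lambda n: [int(d) for d in str(n)][::-1]   # 1408 -> [8, 0, 4, 1]
    coeffs = poly_multiply(digits(x), digits(y))        # convolution of digit vectors
    return sum(int(c) * 10**k for k, c in enumerate(coeffs))  # evaluate at x = 10

print(int_multiply(1408, 1408), 1408 * 1408)   # both print 1982464
```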